Bounds for the Computational Power and Learning Complexity of Analog Neural Nets (Electronic Colloquium on Computational Complexity)
Author
Abstract
It is shown that high-order feedforward neural nets of constant depth with piecewise polynomial activation functions and arbitrary real weights can be simulated, for Boolean inputs and outputs, by neural nets of somewhat larger size and depth with Heaviside gates and weights from {-1, 0, 1}. This provides the first known upper bound for the computational power of the former type of neural nets. It is also shown that in the case of first-order nets with piecewise linear activation functions, one can replace arbitrary real weights by rational numbers with polynomially many bits without changing the Boolean function computed by the neural net. In order to prove these results we introduce two new methods for reducing nonlinear problems about weights in multi-layer neural nets to linear problems for a transformed set of parameters. These transformed parameters can be interpreted as weights in a somewhat larger neural net. As another application of our new proof technique we show that neural nets with piecewise polynomial activation functions and a constant number of analog inputs are probably approximately learnable (in Valiant's model for PAC-learning).
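To make the flavor of such weight-discretization results concrete, here is a toy Python sketch (not the paper's construction): for a single Heaviside gate on Boolean inputs, a brute-force search finds small integer weights that compute exactly the same Boolean function as given arbitrary real weights. The specific real weights, the search range, and the helper names (heaviside, boolean_table, find_integer_gate) are illustrative assumptions; the paper's theorem handles entire multi-layer nets and achieves weights from {-1, 0, 1} at the cost of a somewhat larger net.

```python
import itertools

def heaviside(z):
    """Heaviside gate: output 1 iff the weighted sum reaches the threshold."""
    return 1 if z >= 0 else 0

# A single threshold gate with arbitrary real weights (illustrative values).
real_w, real_theta = (0.731, -1.249, 0.518), 0.4

def real_gate(x):
    return heaviside(sum(w * xi for w, xi in zip(real_w, x)) - real_theta)

def boolean_table(gate, n):
    """The Boolean function computed by `gate`, as a truth table."""
    return tuple(gate(x) for x in itertools.product((0, 1), repeat=n))

def find_integer_gate(target, n, bound=3):
    """Brute-force search for small integer weights with the same table."""
    rng = range(-bound, bound + 1)
    for w in itertools.product(rng, repeat=n):
        for theta in rng:
            gate = lambda x, w=w, t=theta: heaviside(
                sum(wi * xi for wi, xi in zip(w, x)) - t)
            if boolean_table(gate, n) == target:
                return w, theta
    return None

# Real weights buy no extra power for Boolean inputs and outputs here.
print("integer replacement:", find_integer_gate(boolean_table(real_gate, 3), 3))
```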
Similar papers
The Computational Power of Spiking Neurons Depends on the Shape of the Postsynaptic Potentials
Recently one has started to investigate the computational power of spiking neurons (also called "integrate-and-fire neurons"). These are neuron models that are substantially more realistic from the biological point of view than those traditionally employed in artificial neural nets. It has turned out that the computational power of networks of spiking neurons is quite large. In partic...
Full text
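As a rough illustration of the model mentioned in the snippet above, the following minimal Python sketch simulates a noise-free integrate-and-fire neuron whose membrane potential is a weighted sum of postsynaptic potential (PSP) kernels triggered by input spikes. The piecewise linear kernel shape and all parameters are illustrative assumptions, not taken from the paper.

```python
def psp(t, rise=1.0, decay=4.0):
    """Piecewise linear postsynaptic potential: linear rise, linear decay."""
    if t < 0:
        return 0.0
    if t <= rise:
        return t / rise
    return max(0.0, 1.0 - (t - rise) / decay)

def first_firing_time(spike_times, weights, threshold, t_max=20.0, dt=0.01):
    """First time the summed membrane potential reaches the threshold."""
    for k in range(int(t_max / dt)):
        t = k * dt
        u = sum(w * psp(t - s) for s, w in zip(spike_times, weights))
        if u >= threshold:
            return t
    return None  # the neuron never fires within the simulated window

# Two weighted input spikes; the firing time encodes an analog combination.
print(first_firing_time(spike_times=[0.0, 1.5], weights=[0.8, 0.7],
                        threshold=1.2))
```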
Agnostic PAC-Learning of Functions on Analog Neural Nets (Electronic Colloquium on Computational Complexity)
We consider learning on multi-layer neural nets with piecewise polynomial activation functions and a fixed number k of numerical inputs. We exhibit arbitrarily large network architectures for which efficient and provably successful learning algorithms exist in the rather realistic refinement of Valiant's model for probably approximately correct learning ("PAC-learning") where no a-priori assumptions...
Full text
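The agnostic setting can be illustrated by a minimal empirical-risk-minimization sketch in Python (an assumption-laden toy, not the paper's algorithm): labels come from an arbitrary source with no promise that any hypothesis fits them, and the learner returns the hypothesis with the least empirical error from a crudely discretized class of piecewise linear gates with k = 2 inputs. The grid granularity, sample size, and helper names are all assumptions made for the example.

```python
import itertools, random

def sat(z):
    """Saturated-linear activation: a simple piecewise linear function."""
    return max(0.0, min(1.0, z))

def hypothesis(w0, w1, b):
    """A single gate with k = 2 analog inputs, thresholded to a 0/1 output."""
    return lambda x: 1 if sat(w0 * x[0] + w1 * x[1] + b) >= 0.5 else 0

random.seed(0)
# Agnostic setting: labels are arbitrary (here: random), so the best we can
# hope for is to compete with the best hypothesis in the class.
sample = [((random.random(), random.random()), random.randint(0, 1))
          for _ in range(200)]

grid = [i / 4 for i in range(-8, 9)]  # crude discretization of weight space
best_err, best_params = min(
    (sum(h(x) != y for x, y in sample), params)
    for params in itertools.product(grid, repeat=3)
    for h in [hypothesis(*params)]
)
print("least empirical error:", best_err / len(sample), "at", best_params)
```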
Lower bounds over Boolean inputs for deep neural networks with ReLU gates
Motivated by the resurgence of neural networks in being able to solve complex learning tasks, we undertake a study of high-depth networks using ReLU gates, which implement the function x ↦ max{0, x}. We try to understand the role of depth in such neural networks by showing size lower bounds against such network architectures in parameter regimes hitherto unexplored. In particular we show the foll...
Full text
VC Dimension of Sigmoidal and General Pfaffian Networks
We introduce a new method for proving explicit upper bounds on the VC dimension of general functional basis networks, and prove as an application, for the first time, that the VC dimension of analog neural networks with the sigmoidal activation function σ(y) = 1/(1 + e^{-y}) is bounded by a quadratic polynomial O((lm)^2) in both the number l of programmable parameters and the number m of nodes. The pro...
Full text
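To illustrate what such a VC-dimension bound measures, here is a hedged Monte Carlo sketch in Python (illustrative only; the O((lm)^2) bound above is proved analytically, not by sampling): we sample the programmable parameters of a tiny one-hidden-unit sigmoidal net and count how many dichotomies of a small point set its thresholded outputs realize. The architecture, parameter range, and trial count are assumptions of the example.

```python
import math, random

def sigmoid(y):
    return 1.0 / (1.0 + math.exp(-y))

def net(params, x):
    """A tiny sigmoidal net: one hidden unit, one output unit, input x."""
    w1, b1, w2, b2 = params
    return sigmoid(w2 * sigmoid(w1 * x + b1) + b2)

def realized_dichotomies(points, trials=20000):
    """Sample parameter vectors and collect the 0/1 patterns they induce."""
    random.seed(1)
    seen = set()
    for _ in range(trials):
        p = [random.uniform(-10.0, 10.0) for _ in range(4)]
        seen.add(tuple(net(p, x) >= 0.5 for x in points))
    return seen

points = [-1.0, 0.0, 1.0]
found = realized_dichotomies(points)
print(len(found), "of", 2 ** len(points), "dichotomies realized")
```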
Cryptographic Hardness Results for Learning Intersections of Halfspaces
We give the first representation-independent hardness results for PAC learning intersections of halfspaces, a central concept class in computational learning theory. Our hardness results are derived from two public-key cryptosystems due to Regev, which are based on the worst-case hardness of well-studied lattice problems. Specifically, we prove that a polynomial-time algorithm for PAC learning i...
Full text